Large pre-trained language models have recently enabled open-ended generation frameworks (e.g., prompt-to-text NLG) to tackle a variety of tasks going beyond traditional data-to-text generation. While this framework is more general, it is under-specified and often lacks controllability, restricting its real-world usage. We propose a new grounded keys-to-text generation task: generate a factual description of an entity given a set of guiding keys and grounding passages. To address this task, we introduce a new dataset, called EntDeGen. Inspired by recent QA-based evaluation measures, we propose an automatic metric, MAFE, for the factual correctness of generated descriptions. Our EntDescriptor model is equipped with strong rankers to fetch helpful passages and generate entity descriptions. Experiments show a good correlation (60.14) between our proposed metric and human judgments of factuality. Our rankers significantly improve the factual correctness of generated descriptions (15.95% and 34.51% relative gains in recall and precision). Finally, our ablation study highlights the benefit of combining keys and groundings.
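MAFE itself is defined in the paper; purely as a hedged sketch of the QA-based evaluation idea it draws on, one could probe the generated description and its grounding passages with the same questions and compare the extracted answers (the question set and token-F1 scoring below are illustrative assumptions, not the paper's metric):

```python
# Illustrative QA-based factuality check in the spirit of MAFE (not the
# paper's exact metric; questions and scoring are placeholder assumptions).
from collections import Counter
from transformers import pipeline  # pip install transformers

qa = pipeline("question-answering")  # any extractive QA model

def token_f1(pred: str, gold: str) -> float:
    """Token-level F1 between two answer strings."""
    p, g = pred.lower().split(), gold.lower().split()
    overlap = sum((Counter(p) & Counter(g)).values())
    if overlap == 0:
        return 0.0
    precision, recall = overlap / len(p), overlap / len(g)
    return 2 * precision * recall / (precision + recall)

def factual_score(description: str, passages: str, questions: list[str]) -> float:
    """Ask the same questions against the generated description and the
    grounding passages; answer agreement is taken as factual consistency."""
    scores = [token_f1(qa(question=q, context=description)["answer"],
                       qa(question=q, context=passages)["answer"])
              for q in questions]
    return sum(scores) / len(scores) if scores else 0.0
```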
Narrative summarization aims to produce a distilled version of a narrative that describes its most salient events and characters. Summarizing a narrative is challenging as it requires an understanding of event causality and character behaviors. To encourage research in this direction, we propose NarraSum, a large-scale narrative summarization dataset. It contains 122K narrative documents collected from plot descriptions of movies and TV episodes of diverse genres, paired with their corresponding abstractive summaries. Experiments show that there is a large performance gap between humans and the state-of-the-art summarization models on NarraSum. We hope that this dataset will promote future research in summarization, as well as broader studies of natural language understanding and generation. The dataset is available at https://github.com/zhaochaocs/narrasum.
Opinion summarization is the task of creating summaries that capture the popular opinions in user reviews. In this paper, we introduce Geodesic Summarizer (GeoSumm), a novel system that performs unsupervised extractive opinion summarization. GeoSumm consists of an encoder-decoder based representation model that represents text as a distribution over latent semantic units. GeoSumm generates these representations by performing dictionary learning over pre-trained text representations at multiple decoder layers. We then use these representations to quantify the relevance of review sentences via a novel geodesic-distance-based scoring mechanism. We use the relevance scores to identify popular opinions and compose general as well as aspect-specific summaries. Our proposed model, GeoSumm, achieves state-of-the-art performance on three opinion summarization datasets. We perform additional experiments to analyze the model's behavior and showcase its ability to generalize across different domains.
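A minimal sketch of this pipeline under stated assumptions: sentence embeddings are sparse-coded over a learned dictionary (standing in for the latent semantic units), the codes are normalized into distributions, and sentences are scored by geodesic distance on the probability simplex to the corpus mean. The layers used, the dictionary learner, and the exact scoring in GeoSumm may differ.

```python
# Rough sketch of the GeoSumm idea; hyperparameters and scoring are
# illustrative assumptions, not the paper's implementation.
import numpy as np
from sklearn.decomposition import DictionaryLearning

def semantic_distributions(embeddings: np.ndarray, n_units: int = 64) -> np.ndarray:
    """Dictionary-learn latent semantic units and map each sentence
    embedding to a probability distribution over those units."""
    dl = DictionaryLearning(n_components=n_units, transform_algorithm="lasso_lars")
    codes = np.abs(dl.fit_transform(embeddings))  # non-negative unit usage
    return codes / (codes.sum(axis=1, keepdims=True) + 1e-9)

def relevance_scores(dists: np.ndarray) -> np.ndarray:
    """Score sentences by geodesic distance (Fisher metric on the simplex)
    to the mean distribution; closer to the mean = more 'popular' opinion."""
    mean = dists.mean(axis=0)
    mean /= mean.sum()
    # geodesic distance between distributions p, q: 2 * arccos(sum(sqrt(p * q)))
    bc = np.sqrt(dists * mean).sum(axis=1).clip(0.0, 1.0)
    return -2 * np.arccos(bc)  # higher score = closer to the consensus
```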
Machine learning systems are often deployed to make critical decisions, such as credit lending and hiring. When making decisions, such systems often encode users' demographic information (e.g., gender, age) in their intermediate representations. This can result in decisions biased towards specific demographic groups. Prior work has focused on the intermediate representations of such systems to ensure fair decisions. However, these approaches fail to remain fair as the task or the demographic distribution changes. To ensure fairness in the wild, it is important for a system to adapt to such changes as it accesses new data in an incremental fashion. In this work, we address this problem by introducing the task of learning fair representations in an incremental learning setting. To this end, we present Fairness-aware Incremental Representation Learning (FaIRL), a representation learning system that can sustain fairness while incrementally learning new tasks. FaIRL achieves fairness and learns new tasks by controlling the rate-distortion function of the learned representations. Our empirical evaluations show that FaIRL is able to make fair decisions while achieving high performance on the target task, outperforming several baselines.
Text representations learned by machine learning models often encode undesirable demographic information about the user. Predictive models built on these representations can rely on such information, resulting in biased decisions. We present a novel debiasing technique, Fairness-aware Rate Maximization (FaRM), which removes protected information by making representations of instances belonging to the same protected attribute class uncorrelated, using the rate-distortion function. FaRM is able to debias representations with or without a target task at hand. It can also be adapted to remove information about multiple protected attributes simultaneously. Empirical evaluations show that FaRM achieves state-of-the-art performance on several datasets, and the learned representations leak significantly less protected attribute information against attacks by a non-linear probing network.
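FaRM's exact objective is given in the paper; the sketch below only illustrates the coding-rate quantity that rate-maximization approaches of this kind build on (the rate-distortion estimate of Ma et al.), together with a hypothetical per-group loss that spreads out, i.e. decorrelates, representations within each protected class:

```python
# Minimal sketch of a coding-rate-based debiasing objective; the per-group
# loss is a hypothetical reading of the abstract, not FaRM's exact loss.
import torch

def coding_rate(Z: torch.Tensor, eps: float = 0.5) -> torch.Tensor:
    """R(Z) = 1/2 * logdet(I + d / (n * eps^2) * Z^T Z), rows of Z are samples.
    A larger rate means the representations are more spread out / decorrelated."""
    n, d = Z.shape
    I = torch.eye(d, device=Z.device)
    return 0.5 * torch.logdet(I + (d / (n * eps ** 2)) * Z.T @ Z)

def debias_loss(Z: torch.Tensor, groups: torch.Tensor) -> torch.Tensor:
    """Maximize the coding rate within each protected-attribute group so that
    same-group representations become uncorrelated, and hence carry little
    information about the attribute. Returned negated, as a loss to minimize."""
    loss = torch.zeros((), device=Z.device)
    for g in groups.unique():
        loss = loss - coding_rate(Z[groups == g])
    return loss
```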
This paper proposes a novel self-supervision-based Cut-and-Paste GAN that performs foreground object segmentation and generates realistic composite images without manual annotations. We accomplish this through a simple yet effective self-supervised approach coupled with a U-Net based discriminator. The proposed method extends the ability of standard discriminators to learn not only global data representations via classification (real/fake) but also semantic and structural information through pseudo labels created by the self-supervised task. It empowers the generator to create meaningful masks by forcing it to learn informative per-pixel as well as global image feedback from the discriminator. Our experiments demonstrate that the proposed method significantly outperforms state-of-the-art methods on standard benchmark datasets.
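The compositing step at the heart of cut-and-paste approaches is simple to state; the following is a generic sketch rather than the paper's exact pipeline (the U-Net discriminator supplying the per-pixel feedback is not shown):

```python
# Generic cut-and-paste compositing step; names and shapes are illustrative.
import torch

def composite(fg: torch.Tensor, bg: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
    """fg, bg: (B, 3, H, W) images; mask: (B, 1, H, W) in [0, 1], predicted by
    the generator. The discriminator judges whether the composite looks real,
    which pressures the mask to tightly cover the foreground object."""
    return mask * fg + (1.0 - mask) * bg
```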
We introduce a pivot for exact selective inference with randomization. Not only does our pivot lead to exact inference in Gaussian regression models, but it is also available in closed form. We reduce the problem of exact selective inference to a bivariate truncated Gaussian distribution. By doing so, we give up some of the power that is achieved with approximate inference in Panigrahi and Taylor (2022). Yet we always produce narrower confidence intervals than a closely related data-splitting procedure. For popular instances of Gaussian regression, we demonstrate this price of exact selective inference, in terms of power, in simulated experiments and in an HIV drug resistance analysis.
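The paper works with a bivariate truncated Gaussian; as a univariate illustration of the same principle, the CDF of a truncated Gaussian evaluated at the observed statistic is an exact pivot (uniform on [0, 1] under the true mean) and can be inverted over the mean to produce exact selective confidence intervals. This textbook version is a sketch, not the paper's pivot:

```python
# Univariate truncated-Gaussian pivot: a standard building block for exact
# selective inference; the paper's bivariate construction is more involved.
from scipy.stats import norm

def truncated_gaussian_pivot(x: float, mu: float, sigma: float,
                             a: float, b: float) -> float:
    """P(X <= x | a <= X <= b) for X ~ N(mu, sigma^2). Under the true mu this
    is Uniform(0, 1), so inverting it over mu yields exact intervals."""
    lo, hi = norm.cdf((a - mu) / sigma), norm.cdf((b - mu) / sigma)
    return (norm.cdf((x - mu) / sigma) - lo) / (hi - lo)
```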
Transformer-based language models have been shown to be highly effective for several NLP tasks. In this paper, we consider three transformer models, BERT, RoBERTa, and XLNet, in both small and large versions, and investigate how faithful their representations are with respect to the semantic content of texts. We formalize a notion of semantic faithfulness, in which the semantic content of a text should causally figure in a model's inferences in question answering. We then test this notion by observing a model's behavior on answering questions about a story after performing two novel semantic interventions -- deletion intervention and negation intervention. While transformer models achieve high performance on standard question answering tasks, we show that they fail to be semantically faithful once we perform these interventions for a significant number of cases (~50% for deletion intervention, and ~20% drop in accuracy for negation intervention). We then propose an intervention-based training regime that can mitigate the undesirable effects of deletion intervention by a significant margin (from ~50% to ~6%). We analyze the inner workings of the models to better understand the effectiveness of intervention-based training for deletion intervention. But we show that this training does not attenuate other aspects of semantic unfaithfulness, such as the models' inability to deal with negation intervention or to capture the predicate-argument structure of texts. We also test InstructGPT, via prompting, for its ability to handle the two interventions and to capture predicate-argument structure. While InstructGPT models achieve very high performance on the predicate-argument structure task, they fail to respond adequately to our deletion and negation interventions.
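As we read the abstract, the two interventions admit a simple sketch; the sentence splitting and naive negation below are our own crude assumptions, not the paper's implementation:

```python
# Crude sketches of deletion and negation interventions on a story; a real
# implementation would use proper sentence segmentation and parsing.
def deletion_intervention(story: str, answer: str) -> str:
    """Drop every sentence containing the answer span; a semantically
    faithful model should no longer answer the question from what remains."""
    kept = [s for s in story.split(". ") if answer.lower() not in s.lower()]
    return ". ".join(kept)

def negation_intervention(story: str, answer: str) -> str:
    """Naively negate the supporting sentences; a semantically faithful
    model's answer should change accordingly, not stay the same."""
    out = []
    for s in story.split(". "):
        if answer.lower() in s.lower():
            s = s.replace(" is ", " is not ", 1)
        out.append(s)
    return ". ".join(out)
```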
Imaging markers of cerebral small vessel disease provide valuable information on brain health, but their manual assessment is time-consuming and hampered by substantial intra- and inter-rater variability. Automated rating may benefit biomedical research as well as clinical assessment, but the diagnostic reliability of existing algorithms is unknown. Here, we present the Vascular Lesions Detection and Segmentation (Where is VALDO?) challenge, which was run as a satellite event at the international conference on Medical Image Computing and Computer Aided Intervention (MICCAI) 2021. This challenge aimed to promote the development of methods for the automated detection and segmentation of small and sparse imaging markers of cerebral small vessel disease, namely enlarged perivascular spaces (EPVS) (Task 1), cerebral microbleeds (Task 2), and lacunes of presumed vascular origin (Task 3), while leveraging weak and noisy labels. Overall, 12 teams participated in the challenge, proposing solutions for one or more tasks (4 for Task 1 - EPVS, 9 for Task 2 - Microbleeds, and 6 for Task 3 - Lacunes). Multi-cohort data was used for both training and evaluation. Results showed large variability in performance both across teams and across tasks, with promising results notably for Task 1 - EPVS and Task 2 - Microbleeds, and no practically useful results yet for Task 3 - Lacunes. The challenge also highlighted performance inconsistencies across cases that may deter use at the individual level, while still proving useful at a population level.
Recent years have seen great advances in the development of deep learning models for digital pathology (DP) applications, evidenced by the increasingly common deployment of these models in both research and clinical settings. Although such models have shown unprecedented performance in solving fundamental computational tasks in DP applications, they suffer from catastrophic forgetting when adapted to unseen data via transfer learning. With an increasing need for deep learning models to handle ever-changing data distributions, including evolving patient populations and new diagnostic assays, continual learning (CL) models that mitigate such forgetting need to be introduced into DP-based analysis. However, to our knowledge, there is no systematic study of such models for DP-specific applications. Here, we propose CL scenarios in a DP setting, where histopathology image data from different sources/distributions arrive sequentially and their knowledge is integrated into a single model without retraining on all the data from scratch. We then established an augmented dataset for colorectal cancer H&E classification to simulate shifts in image appearance, and evaluated CL model performance under the proposed scenarios. We further leveraged a breast tumor H&E dataset alongside the colorectal cancer one to evaluate CL across different tumor types. In addition, we evaluated CL methods in an online few-shot setting under constraints on annotation and computational resources. We reveal promising results of CL in DP applications, which may pave the way for the application of these methods in clinical practice.